Divergence (computer science)

In computer science, a computation is said to diverge if it does not terminate or terminates in an (unobservable) exceptional state. Otherwise it is said to converge. In domains where computations are expected to be infinite, such as process calculi, a computation is said to diverge if it fails to be productive (i.e. fails to always produce an action within a finite amount of time).
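The distinction can be illustrated with a minimal Python sketch (the function names are illustrative, not from the literature): a convergent computation eventually returns an observable result, while a divergent one never does.

```python
def converges():
    # Terminates with an observable result: this computation converges.
    return 42

def diverges():
    # Calls itself forever and never returns a result: this computation
    # diverges. (In practice Python aborts it with a RecursionError once
    # the call stack overflows, but mathematically it has no result.)
    return diverges()

result = converges()
```
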

Definitions

Various subfields of computer science use varying, but mathematically precise, definitions of what it means for a computation to converge or diverge.

Rewriting

In abstract rewriting, a reduction is called convergent if and only if it is both confluent and terminating.[1] The notation t ↓ n means that term t reduces to normal form n in zero or more reductions, t↓ means t reduces to some normal form in zero or more reductions, and t↑ means t does not reduce to a normal form.
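These notions can be sketched with a small Python helper (`normalize`, `dec`, and `inc` are hypothetical names, not a standard API): a single-step reduction relation is applied until either a normal form is reached (t↓) or a step budget runs out, which is how non-normalizing terms (t↑) show up in a mechanical check.

```python
def normalize(term, step, max_steps=1000):
    """Apply `step` repeatedly. `step(t)` returns the reduct of t,
    or None when no rule applies (t is already in normal form).
    Returns (final_term, converged_flag)."""
    for _ in range(max_steps):
        reduct = step(term)
        if reduct is None:
            return term, True      # t ↓ term: a normal form was reached
        term = reduct
    return term, False             # budget exhausted: t may diverge (t↑)

# A terminating, confluent system on naturals: n -> n - 1 while n > 0,
# so every term reduces to the unique normal form 0.
dec = lambda n: n - 1 if n > 0 else None

# A non-terminating system: n -> n + 1, so no term has a normal form.
inc = lambda n: n + 1
```

Here `normalize(5, dec)` converges to the normal form 0, while `normalize(0, inc)` exhausts its step budget without ever reaching one.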

In the lambda calculus an expression is divergent if it has no normal form.[2]
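The standard example is Ω = (λx. x x)(λx. x x), which β-reduces only to itself and therefore has no normal form. Encoding it directly with Python lambdas (a sketch; Python's finite call stack stands in for the infinite reduction sequence) makes the divergence observable as a RecursionError:

```python
import sys
sys.setrecursionlimit(100)     # keep the inevitable overflow fast

omega = lambda x: x(x)         # the self-application λx. x x

try:
    omega(omega)               # Ω = (λx. x x)(λx. x x): reduces only to itself
    diverged = False
except RecursionError:
    diverged = True            # the stack overflow mirrors "no normal form"
```
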

Denotational semantics

In denotational semantics an object function f : A → B can be modelled as a mathematical function f⊥ : A ∪ {⊥} → B ∪ {⊥} where ⊥ (bottom) indicates that the object function or its argument diverges.
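A common way to mirror this model in a programming language (a sketch; `BOTTOM` and `lift` are illustrative names, not a standard API) is to represent ⊥ by a distinguished value and lift ordinary functions so that they are strict in it:

```python
BOTTOM = None   # stands in for ⊥ (a diverging computation or argument)

def lift(f):
    """Lift f : A -> B to a function on A ∪ {⊥} -> B ∪ {⊥}:
    if the argument diverges, so does the application."""
    def f_bottom(x):
        if x is BOTTOM:
            return BOTTOM   # divergence propagates through the call
        return f(x)
    return f_bottom

abs_bottom = lift(abs)      # abs_bottom(-3) == 3; abs_bottom(BOTTOM) is BOTTOM
```
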

Concurrency theory

In the calculus of Communicating Sequential Processes (CSP), divergence is a drastic situation where a process performs an endless series of hidden actions. For example, consider the following process, defined in CSP notation:

Clock = tick → Clock

The traces of this process are defined as:

traces(Clock) = {⟨⟩, ⟨tick⟩, ⟨tick, tick⟩, …} = {tick}*

Now, consider the following process, which conceals the tick event of the Clock process:

P = Clock \ tick

P performs an endless series of hidden tick actions while communicating nothing to its environment, so by definition P is a divergent process.
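A bounded simulation in Python (a sketch; `clock` and `hide` are illustrative names, not a CSP library) shows why P is divergent: Clock makes internal progress forever, but after hiding tick an observer never sees a single visible event.

```python
import itertools

def clock():
    # Clock = tick -> Clock: an endless stream of tick events.
    while True:
        yield "tick"

def hide(process, hidden):
    # P = process \ hidden: conceal the named events from the observer.
    return (event for event in process if event not in hidden)

# A finite prefix of Clock's (infinite) behaviour: every step is a tick.
prefix = list(itertools.islice(clock(), 3))

# Watch 1000 steps of P = Clock \ tick: every one is a hidden action,
# so the observer sees nothing at all — P is divergent.
bounded = itertools.islice(clock(), 1000)
visible = list(hide(bounded, {"tick"}))
```
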

Notes

  1. Baader & Nipkow 1998, p. 9.
  2. Pierce 2002, p. 65.

References

  - Baader, Franz; Nipkow, Tobias (1998). Term Rewriting and All That. Cambridge University Press.
  - Pierce, Benjamin C. (2002). Types and Programming Languages. MIT Press.